Learning in cortical networks through error back-propagation
Abstract
To learn efficiently from feedback, cortical networks need to update synaptic weights at multiple levels of the cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is error back-propagation. It has been used successfully in both machine learning and modelling of the brain's cognitive functions. However, in the back-propagation algorithm the change in synaptic weights is a complex function of the weights and activities of neurons that are not directly connected with the synapse being modified. Hence it has been unclear whether it could be implemented in biological neural networks. Here we analyse the relationship between the back-propagation algorithm and the predictive coding model of information processing in the cortex. We show that when the predictive coding model is used for supervised learning, it performs computations very similar to those of the back-propagation algorithm. Furthermore, for certain parameters, the weight changes in the predictive coding model converge to those of the back-propagation algorithm. This suggests that cortical networks with simple Hebbian synaptic plasticity can implement efficient learning algorithms in which synapses in areas at multiple levels of the hierarchy are modified to minimize the error on the output.
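The correspondence described in the abstract can be made concrete with a small numerical sketch. The following numpy toy is an illustration, not the paper's exact formulation; the layer sizes, tanh nonlinearity, step sizes, and variable names are all assumed. It builds a two-layer predictive coding network, clamps the input and the target, relaxes the hidden activity to equilibrium, and then compares the resulting Hebbian weight changes, each the product of a local prediction error and the presynaptic activity, with the back-propagation gradients for the same network.

import numpy as np

rng = np.random.default_rng(0)

# Tiny network: input (4 units) -> hidden (5) -> output (3).
f  = np.tanh                              # activation function
df = lambda a: 1.0 - np.tanh(a) ** 2      # its derivative

W1 = rng.normal(scale=0.5, size=(5, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(3, 5))   # hidden -> output weights
x0 = rng.normal(size=4)                   # input pattern (clamped)

# Forward pass, as used by back-propagation.
x1 = W1 @ f(x0)
x2 = W2 @ f(x1)
t  = x2 + 0.01 * rng.normal(size=3)       # target close to the prediction

# Back-propagation weight changes (the reference).
d2     = t - x2                           # error at the output layer
dW2_bp = np.outer(d2, f(x1))
d1     = (W2.T @ d2) * df(x1)             # error back-propagated to hidden
dW1_bp = np.outer(d1, f(x0))

# Predictive coding: with x0 and the output (clamped to t) fixed, relax
# the hidden activity v1 by gradient descent on the network's energy,
# i.e. the sum of squared prediction errors.
v1 = x1.copy()
for _ in range(2000):
    e1 = v1 - W1 @ f(x0)                  # prediction error at hidden layer
    e2 = t - W2 @ f(v1)                   # prediction error at output layer
    v1 += 0.1 * (-e1 + df(v1) * (W2.T @ e2))

# Hebbian weight changes: local error times presynaptic activity.
e1 = v1 - W1 @ f(x0)
e2 = t - W2 @ f(v1)
dW2_pc = np.outer(e2, f(v1))
dW1_pc = np.outer(e1, f(x0))

print(np.max(np.abs(dW1_pc - dW1_bp)))   # small: close to back-propagation
print(np.max(np.abs(dW2_pc - dW2_bp)))

Because the clamped target lies close to the network's own prediction, the equilibrium errors approximate the back-propagated deltas, and the two pairs of weight changes differ only by higher-order terms; with a target far from the prediction the two rules diverge, which is the sense in which the convergence holds only for certain parameters.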
Similar resources
Estimating of Scour in Downstream of the Water Level Regulation Structures
Scour downstream of hydraulic structures is a phenomenon that usually occurs when the velocity or shear stress exceeds a critical level. In this paper, using laboratory data from the Borman-Jouline and De-Agostino studies, an attempt was made to derive more accurate equations for calculating the maximum depth of scour downstream of water level regulation structures. Co...
Geoid Determination Based on Log Sigmoid Function of Artificial Neural Networks: (A case Study: Iran)
A Back Propagation Artificial Neural Network (BPANN) is a well-known learning algorithm predicated on a gradient descent method that minimizes the square error between the network output and the target output values. In this study, 261 GPS/Leveling and 8869 gravity intensity values of Iran were selected; then the geoid with three methods, “ellipsoidal Stokes integral”, “BPANN”, and “collocation” ...
Direct Feedback Alignment Provides Learning in Deep Neural Networks
Artificial neural networks are most commonly trained with the back-propagation algorithm, where the gradient for learning is provided by back-propagating the error, layer by layer, from the output layer to the hidden layers. A recently discovered method called feedback alignment shows that the weights used for propagating the error backward do not have to be symmetric with the weights used for p... (a minimal sketch of this idea appears after this list)
On the use of back propagation and radial basis function neural networks in surface roughness prediction
Various types of artificial neural networks are examined and compared for the prediction of surface roughness in manufacturing technology. The aim of the study is to evaluate different kinds of neural networks and observe their performance and applicability to the same problem. More specifically, feed-forward artificial neural networks are trained with three different back propagation algorithms, ...
Neural Network Performance Analysis for Real Time Hand Gesture Tracking Based on Hu Moment and Hybrid Features
This paper presents a comparison study between multilayer perceptron (MLP) and radial basis function (RBF) neural networks with supervised learning and the back propagation algorithm to track hand gestures. Both networks have two output classes, hand and face. Skin is detected by a region-based algorithm in the image, and then the networks are applied to video sequences frame by frame in...
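The feedback alignment result summarized in the "Direct Feedback Alignment Provides Learning in Deep Neural Networks" entry above can also be illustrated with a short numpy sketch (a toy with assumed sizes, learning rates, and variable names, not the authors' implementation). With a single hidden layer, direct feedback alignment reduces to delivering the output error through a fixed random matrix B1 instead of through the transpose of the forward weights W2; learning still proceeds because the forward weights gradually align with the random feedback.

import numpy as np

rng = np.random.default_rng(1)

f  = np.tanh                               # activation function
df = lambda a: 1.0 - np.tanh(a) ** 2       # its derivative

W1 = rng.normal(scale=0.5, size=(20, 10))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 20))   # hidden -> output weights
B1 = rng.normal(scale=0.5, size=(20, 5))   # fixed random feedback matrix
                                           # (not W2.T as in back-propagation)
x = rng.normal(size=10)                    # a single training input
t = rng.normal(size=5)                     # its target

for step in range(200):
    a1 = W1 @ x                            # hidden pre-activation
    h  = f(a1)
    y  = W2 @ h                            # network output
    e  = t - y                             # output error
    if step == 0:
        print("initial loss:", 0.5 * np.sum(e ** 2))
    d1 = (B1 @ e) * df(a1)                 # error sent through random weights
    W2 += 0.01 * np.outer(e, h)
    W1 += 0.01 * np.outer(d1, x)

print("final loss:", 0.5 * np.sum(e ** 2))  # falls despite random feedback

The point of the toy is only that the backward weights need not mirror the forward ones: on this single input the loss falls steadily even though the hidden layer never receives the error through W2.T.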